1.
Front Neurosci ; 16: 846117, 2022.
Article in English | MEDLINE | ID: mdl-35546888

ABSTRACT

Older adults process emotions in speech differently than young adults do. However, it is unclear whether these age-related changes affect all speech channels to the same extent, and whether they originate from a sensory or a cognitive source. The current study adopted a psychophysical approach to directly compare young and older adults' sensory thresholds for emotion recognition in two channels of spoken emotion: prosody (tone) and semantics (words). A total of 29 young adults and 26 older adults listened to 50 spoken sentences presenting different combinations of emotions across prosody and semantics. They were asked to recognize the prosodic or semantic emotion in separate tasks. Sentences were presented against a background of speech-spectrum noise at SNRs ranging from -15 dB (difficult) to +5 dB (easy). Individual recognition thresholds were calculated (by fitting psychometric functions) separately for prosodic and semantic recognition. Results indicated that: (1) recognition thresholds were better for young adults than for older adults, suggesting an age-related general decrease across channels; (2) recognition thresholds were better for prosody than for semantics, suggesting a prosodic advantage; (3) importantly, the prosodic advantage in thresholds did not differ between age groups (thus a sensory source for age-related differences in spoken-emotion processing was not supported); and (4) larger failures of selective attention were found for older adults than for young adults, indicating that older adults had greater difficulty inhibiting irrelevant information. Taken together, the results do not support a sole sensory source, but rather an interplay of cognitive and sensory sources for age-related differences in spoken-emotion processing.
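
For readers unfamiliar with the threshold-estimation step, the following is a minimal sketch (not the authors' code) of how a recognition threshold can be extracted by fitting a logistic psychometric function to per-SNR accuracy. The SNR levels, accuracy values, chance level, and lapse rate are hypothetical placeholders, not values taken from the study.

    import numpy as np
    from scipy.optimize import curve_fit

    def psychometric(snr, threshold, slope, chance=0.25, lapse=0.02):
        """Logistic function scaled between chance performance and 1 - lapse rate."""
        return chance + (1.0 - chance - lapse) / (1.0 + np.exp(-slope * (snr - threshold)))

    # Hypothetical data for one listener: proportion of correctly recognized
    # emotions at each SNR level, from -15 dB (difficult) to +5 dB (easy).
    snr_levels = np.array([-15.0, -10.0, -5.0, 0.0, 5.0])
    accuracy = np.array([0.30, 0.45, 0.70, 0.88, 0.95])

    # Fit the threshold (SNR at the curve's midpoint) and slope; p0 gives rough starting values.
    (threshold, slope), _ = curve_fit(psychometric, snr_levels, accuracy, p0=[-5.0, 0.5])
    print(f"Estimated recognition threshold: {threshold:.1f} dB SNR (slope {slope:.2f})")

Fitting such a curve separately for the prosodic and semantic tasks yields one threshold per channel per listener, which is what allows the between-channel and between-group comparisons described above.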

2.
Int J Audiol ; 60(5): 319-321, 2021 05.
Article in English | MEDLINE | ID: mdl-33063553

ABSTRACT

OBJECTIVE: COVID-19 social isolation restrictions have accelerated the need to adapt clinical assessment tools to telemedicine. Remote adaptations are especially important for populations at risk, e.g. older adults and individuals with chronic medical comorbidities. In response to this urgent clinical and scientific need, we describe a remote adaptation of the T-RES (Oron et al. 2020; IJA), designed to assess the complex processing of spoken emotions based on identification and integration of the semantics and prosody of spoken sentences. DESIGN: We present iT-RES, an online version of the speech-perception assessment tool, detailing the challenges considered and the solutions chosen when designing the telehealth tool. We show a preliminary validation of performance against the original lab-based T-RES. STUDY SAMPLE: A between-participants design with 78 young adults in two groups (T-RES, n = 39; iT-RES, n = 39). RESULTS: iT-RES performance closely followed that of T-RES, with no group differences found in the main trends: identification of emotions, selective attention, and integration. CONCLUSIONS: The design of iT-RES maps the main challenges of remote auditory assessment and the solutions taken to address them. We hope that this will encourage further efforts toward telehealth adaptations of clinical services, to meet the needs of special populations and avoid halting scientific research.
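
As a rough illustration of the between-participants validation logic (an assumption on our part, not the study's analysis script), one could compare a summary performance measure between the lab-based T-RES group and the remote iT-RES group with an independent-samples test; the accuracy values below are simulated placeholders.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    tres_scores = rng.normal(loc=0.82, scale=0.06, size=39)   # hypothetical T-RES accuracy, n = 39
    itres_scores = rng.normal(loc=0.81, scale=0.06, size=39)  # hypothetical iT-RES accuracy, n = 39

    # Independent-samples t-test; a non-significant result would mirror the
    # reported absence of group differences between the lab and remote versions.
    t_stat, p_value = stats.ttest_ind(tres_scores, itres_scores)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")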


Subjects
Audiology/methods , Speech Audiometry/methods , COVID-19 , Telemedicine/methods , Voice Recognition , Adult , Attention , Emotions , Female , Humans , Male , Quarantine , SARS-CoV-2 , Semantics , Speech Perception , Young Adult